fix: enable multi-GPU DDP training in Jupyter notebooks #928
Merged
Borda merged 35 commits into roboflow:develop on Apr 8, 2026
Conversation
Codecov Report

Additional details and impacted files:

@@           Coverage Diff            @@
##           develop     #928   +/-   ##
========================================
  Coverage       79%      79%
========================================
  Files           97       97
  Lines         7793     7846    +53
========================================
+ Hits          6148     6195    +47
- Misses        1645     1651     +6
Contributor
Pull request overview
Fixes multi-GPU DDP training in interactive notebook environments by preventing early CUDA initialization and by transparently switching notebook DDP strategies away from fork to a spawn-based launcher/strategy.
Changes:
- Add a notebook-safe, spawn-based DDPStrategy replacement for `ddp_notebook`/`ddp_spawn` in the trainer factory.
- Defer inference-model `.to(device)` until first use via a new lazy device-placement helper.
- Replace direct `torch.cuda.is_available()` checks with a device constant intended to avoid CUDA context creation at import time, and update tests accordingly.
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| `src/rfdetr/config.py` | Introduces `_detect_device()` / `DEVICE` to avoid CUDA runtime init at import time. |
| `src/rfdetr/inference.py` | Stops eager model `.to(device)` during model-context construction to prevent early CUDA init. |
| `src/rfdetr/detr.py` | Adds `_ensure_model_on_device()` and calls it from inference/export/optimize/auto-batch paths. |
| `src/rfdetr/training/trainer.py` | Maps `ddp_notebook`/`ddp_spawn` to a spawn-based, interactive-compatible DDP strategy. |
| `src/rfdetr/training/module_model.py` | Uses `config.DEVICE` for compile gating instead of `torch.cuda.is_available()`. |
| `src/rfdetr/training/module_data.py` | Uses `config.DEVICE` for `pin_memory` decisions; preserves configured `num_workers`. |
| `tests/training/test_build_trainer.py` | Adds coverage for spawn-based DDP mapping and `ddp_notebook` precision probing. |
| `tests/training/test_module_data.py` | Adds tests asserting `num_workers`/`prefetch_factor` preservation for strategies. |
| `tests/training/test_module_model.py` | Updates compile test to patch `config.DEVICE` instead of `torch.cuda.is_available()`. |
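To make the gating pattern in `module_model.py`/`module_data.py` concrete, here is a minimal sketch, assuming `DEVICE` is a string exposed by `rfdetr.config`; the variable names below are illustrative, not the modules' actual internals:

```python
from rfdetr.config import DEVICE  # e.g. "cuda", "mps", or "cpu" (assumed str)

# Reading a precomputed constant avoids calling torch.cuda.is_available(),
# which would initialize a CUDA context in the parent process before DDP
# workers are spawned.
use_compile = DEVICE == "cuda"   # compile gating (module_model.py)
pin_memory = DEVICE == "cuda"    # DataLoader pinning (module_data.py)
```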
- Adds `hasattr(torch, "accelerator")` outer guard in `_detect_device()` so PyTorch < 2.4 (where `torch.accelerator` module does not exist) does not raise AttributeError at import time --- Co-authored-by: Claude Code <noreply@anthropic.com>
- Assertions are stripped with `python -O`; use explicit if+raise for required runtime guards --- Co-authored-by: Claude Code <noreply@anthropic.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
…inizar/rf-detr into fix/ddp-notebook-cuda-init
- _MultiProcessingLauncher has no public equivalent in PTL 2.x; adds a comment to monitor for breakage when bumping the PTL lower bound --- Co-authored-by: Claude Code <noreply@anthropic.com>
- Old docstring said "moves the model to the target device" and "ready for inference", both no longer true; model is kept on CPU and moved lazily by _ensure_model_on_device on first use --- Co-authored-by: Claude Code <noreply@anthropic.com>
- Satisfies static analysis requirements; function accepts duck-typed stand-ins, which Any correctly reflects --- Co-authored-by: Claude Code <noreply@anthropic.com>
…n DDP - torch.cuda.is_available() + is_bf16_supported() initialize CUDA in the parent; add a comment documenting this is intentional because all DDP paths use spawn, not fork --- Co-authored-by: Claude Code <noreply@anthropic.com>
- Inline comment inside build_trainer() was a near-verbatim repeat of the module-level block; replaced with a brief cross-reference --- Co-authored-by: Claude Code <noreply@anthropic.com>
…ect_device fallback - test_train_auto_batch_ensures_model_on_device_before_resolve: verifies device placement happens before auto-batch probing (detr.py:512-516) - test_detect_device_falls_back_when_torch_accelerator_absent: simulates PyTorch < 2.4 with no torch.accelerator module - test_detect_device_falls_back_when_current_accelerator_raises: covers RuntimeError catch path - test_detect_device_returns_cpu_when_no_gpu: covers CPU-only fallback --- Co-authored-by: Claude Code <noreply@anthropic.com>
--- Co-authored-by: Claude Code <noreply@anthropic.com>
…inizar/rf-detr into fix/ddp-notebook-cuda-init
Borda
reviewed
Apr 8, 2026
Co-authored-by: Jirka Borovec <6035284+Borda@users.noreply.github.com>
- TestDetectDevice: use @patch decorator + MagicMock(spec=[]) to simulate missing current_accelerator without PropertyMock or class mutation - test_train_auto_batch_ensures_model_on_device_before_resolve: convert to @patch decorators, drop unused tmp_path, remove spurious rfdetr.detr.resolve_auto_batch_config patch (local import means only rfdetr.training.auto_batch is the correct target), explicit side_effect functions replacing fragile `lambda ... or` pattern --- Co-authored-by: Claude Code <noreply@anthropic.com>
…lly to @patch decorators - Remove inline 'import unittest.mock as mock' from test body - Add module-level 'from unittest.mock import MagicMock, patch' - Three context-manager patches → three @patch decorators - mock_trainer.side_effect replaces nested _fake_trainer closure --- Co-authored-by: Claude Code <noreply@anthropic.com>
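For illustration, a hedged sketch of the `@patch` + `MagicMock(spec=[])` pattern described above; the patch target and assertion are assumptions, not the test file's exact code:

```python
from unittest.mock import MagicMock, patch

# MagicMock(spec=[]) exposes no attributes at all, so
# hasattr(torch, "accelerator") inside _detect_device() returns False,
# simulating PyTorch < 2.4 without mutating the real torch module.
@patch("rfdetr.config.torch", MagicMock(spec=[]))
def test_detect_device_falls_back_when_torch_accelerator_absent():
    from rfdetr.config import _detect_device
    assert _detect_device() == "cpu"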
Borda
previously approved these changes
Apr 8, 2026
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
- guard private PTL launcher import with clear runtime error path
- respect explicit CPU accelerator when gating compile/pin_memory
- fix optimize_for_inference CUDA-context tests on CPU builds
- add focused regression tests for launcher compatibility and accelerator overrides

Co-authored-by: OpenAI Codex <codex@openai.com>
Borda
approved these changes
Apr 8, 2026
What does this PR do?
Fixes multi-GPU DDP training (`strategy="ddp_notebook"` and `strategy="ddp_spawn"`), which was completely broken in Jupyter (e.g. Kaggle) notebook environments. The fix addresses two layers of issues:

1. CUDA early initialization: `RFDETRBase()` eagerly moved the model to CUDA during `__init__()`, and module-level `torch.cuda.is_available()` in `config.py` created a CUDA driver context at import time, making multi-process training impossible.
2. OpenMP thread pool corruption after fork: even after fixing CUDA init, PyTorch's OpenMP thread pool (created during model construction) cannot survive `fork()`. The worker threads become zombie handles, causing `SIGABRT: Invalid thread pool!` when the autograd engine initializes in forked children. Fixed by transparently replacing fork-based DDP with a spawn-based strategy.

Related Issue(s): Fixes #923
Type of Change
Testing
Test details:
Unit tests (101 pass locally)
- `test_build_trainer.py`: 52 tests covering precision resolution, strategy selection, ddp_notebook→spawn mapping, EMA guards, logger wiring
- `test_module_data.py`: 49 tests including `test_ddp_notebook_preserves_num_workers` and `test_other_strategy_preserves_num_workers`

Integration test (Kaggle T4 x2)
Validated on Kaggle GPU T4 x2 accelerator (Python 3.12, PyTorch 2.10.0+cu128, PTL 2.6.1):
- `RFDETRBase()`
- `strategy="ddp_notebook"` training (3 epochs, 2×T4)
- `strategy="ddp_spawn"` training (3 epochs, 2×T4)

What This Fixes
- `model.train(devices=2, strategy="ddp_notebook")` in notebook
- `model.train(devices=2, strategy="ddp_spawn")` in notebook
- `model.train(devices=1)`
- `model.predict(img)`
- `model.train() → model.predict(img)`
- `model.export_onnx()` / `model.optimize_for_inference()`

Checklist
Additional Context
The `ddp_notebook → spawn` conversion is transparent to users: they continue passing `strategy="ddp_notebook"` (or `strategy="ddp_spawn"`) and training just works. An INFO log message is emitted when the substitution happens.

The `find_unused_parameters=True` flag is required because RF-DETR's architecture has parameters in the detection head that may not contribute to every loss term (e.g. encoder-only auxiliary losses).
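For example, using the call shape from the "What This Fixes" list above (dataset and epoch arguments omitted for brevity):

```python
from rfdetr import RFDETRBase

model = RFDETRBase()  # stays on CPU; no CUDA context is created yet
# In a notebook, this is transparently remapped to the spawn-based DDP
# strategy, and an INFO message notes the substitution:
model.train(devices=2, strategy="ddp_notebook")  # plus your usual dataset args
```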
Two layers of CUDA initialization that had to be fixed
- Module-level (`config.py`): `torch.cuda.is_available()` creates a CUDA driver context at import time. Fixed with `torch.accelerator.current_accelerator()`, which queries NVML without creating a primary context. A sketch of the detection logic follows this list.
- Model construction (`inference.py`): `nn_model.to("cuda")` fully initializes the CUDA runtime. Fixed by keeping the model on CPU and deferring `.to(device)` to the first `predict()` / `export()` / `batch_size="auto"` call via `_ensure_model_on_device()`.
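A minimal sketch of that detection logic, combining the guards described in the commit notes and tests above (the exact return type and fallback details are assumptions, not the actual `config.py` code):

```python
import torch

def _detect_device() -> str:
    # torch.accelerator only exists on PyTorch >= 2.4; guard the attribute
    # so older versions fall through instead of raising AttributeError.
    if hasattr(torch, "accelerator"):
        try:
            acc = torch.accelerator.current_accelerator()
            if acc is not None:
                return acc.type  # e.g. "cuda" -- no primary context created
        except RuntimeError:
            pass  # accelerator backend present but unusable
    return "cpu"  # CPU-only fallback

DEVICE = _detect_device()
```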
Why spawn instead of fork

PyTorch creates an OpenMP thread pool (default 8 threads) during the first tensor operation (model construction). `fork()` only copies the calling thread, so the OMP worker threads become zombie handles. When the autograd engine in forked children calls `set_num_threads` during `thread_init`, the OMP runtime finds an invalid pool state and aborts with `SIGABRT: Invalid thread pool!`.

This is a fundamental fork+OMP incompatibility; as far as I know, there is no library-level workaround. The fix transparently replaces fork-based `ddp_notebook` with a spawn-based `_NotebookSpawnDDPStrategy` whose launcher is marked `is_interactive_compatible = True`, allowing PTL to accept it in notebook environments. A sketch of the strategy's shape follows.
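A minimal sketch, assuming PTL 2.x internals: `_MultiProcessingLauncher` is the private launcher the commit notes reference, and the `_InteractiveSpawnLauncher` subclass name here is hypothetical, not the PR's actual code.

```python
from pytorch_lightning.strategies import DDPStrategy
from pytorch_lightning.strategies.launchers.multiprocessing import (
    _MultiProcessingLauncher,  # private API; no public equivalent in PTL 2.x
)

class _InteractiveSpawnLauncher(_MultiProcessingLauncher):
    @property
    def is_interactive_compatible(self) -> bool:
        # PTL normally accepts only fork-based launchers in notebooks; spawn
        # is safe here because children start from a clean interpreter with
        # no inherited OpenMP pool or CUDA context.
        return True

class _NotebookSpawnDDPStrategy(DDPStrategy):
    def _configure_launcher(self) -> None:
        self._launcher = _InteractiveSpawnLauncher(self, start_method="spawn")
```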
Performance impact

- First `predict()` call: ~50-200 ms one-time latency from the CPU→GPU model transfer. Strictly one-time: `_ensure_model_on_device()` checks `first_param.device != target` and becomes a no-op once the model is on GPU (see the sketch after this list). After `train()`, the PTL-trained model is already on CUDA (synced at line 548), so even the first post-training `predict()` has zero transfer cost.
- Subsequent `predict()` calls: zero overhead (a single `next(parameters()).device` comparison).
- Inference-only use (`RFDETRBase() → predict()` without training): the one-time transfer happens on the very first call only. All subsequent calls, including batch evaluation loops, are zero-overhead.
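A minimal sketch of the check described above (the real helper in `detr.py` may differ in signature):

```python
import torch

def _ensure_model_on_device(model: torch.nn.Module, device: torch.device) -> None:
    # Fast path: a single device comparison on the first parameter.
    if next(model.parameters()).device != device:
        # Slow path, taken at most once: CPU -> GPU transfer (~50-200 ms).
        model.to(device)
```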